Improved discriminative training techniques for large vocabulary continuous speech recognition
Authors
Abstract
This paper investigates the use of discriminative training techniques for large vocabulary speech recognition with training datasets of up to 265 hours. Techniques for improving lattice-based Maximum Mutual Information Estimation (MMIE) training are described and compared to Frame Discrimination (FD). An objective function which is an interpolation of MMIE and standard Maximum Likelihood Estimation (MLE) is also discussed. Experimental results on both the Switchboard and North American Business News tasks show that MMIE training can yield significant performance improvements over standard MLE, even for the most complex speech recognition problems with very large training sets.
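The abstract does not give the criteria explicitly, so the following is a sketch of the standard forms from the MMIE literature; the notation (λ for the HMM parameters, O_r for the r-th training observation sequence, M_w for the composite model of word sequence w, and the interpolation weight α) is assumed here rather than taken from this page.

```latex
% MMIE objective: the log posterior of each correct transcription w_r
% given the acoustics O_r, summed over the R training utterances.
% The denominator sum over competing hypotheses \hat{w} is, in
% lattice-based training, approximated by the paths in a word lattice.
\mathcal{F}_{\mathrm{MMIE}}(\lambda) =
  \sum_{r=1}^{R} \log
  \frac{p_\lambda(\mathbf{O}_r \mid \mathcal{M}_{w_r})\, P(w_r)}
       {\sum_{\hat{w}} p_\lambda(\mathbf{O}_r \mid \mathcal{M}_{\hat{w}})\, P(\hat{w})}

% One common form of the MMIE/MLE interpolation mentioned in the
% abstract: a weighted combination with tuning constant 0 <= alpha <= 1
% (alpha = 1 recovers pure MLE, alpha = 0 pure MMIE).
\mathcal{F}_{\alpha}(\lambda) =
  (1-\alpha)\,\mathcal{F}_{\mathrm{MMIE}}(\lambda)
  + \alpha \sum_{r=1}^{R} \log p_\lambda(\mathbf{O}_r \mid \mathcal{M}_{w_r})
```

Interpolating toward MLE is typically motivated as a smoothing device: the MLE term regularizes the MMIE update, which can otherwise overtrain on the discriminative criterion.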
Similar Articles
Lattice segmentation and minimum Bayes risk discriminative training for large vocabulary continuous speech recognition
Lattice segmentation techniques developed for Minimum Bayes Risk decoding in large vocabulary speech recognition tasks are used to compute the statistics needed for discriminative training algorithms that estimate HMM parameters so as to reduce the overall risk over the training data. New estimation procedures are developed and evaluated for both small and large vocabulary recognition tasks, an...
Boosting Minimum Bayes Risk Discriminative Training
A new variant of AdaBoost is applied to a Minimum Bayes Risk discriminative training procedure that directly aims at reducing Word Error Rate for Automatic Speech Recognition. Both techniques try to improve the discriminative power of a classifier, and we show that they can be combined to yield even better performance on a small vocabulary continuous speech recognition task. Our results also...
Large Scale Discriminative Training for Speech Recognition
This paper describes, and evaluates on a large scale, the lattice based framework for discriminative training of large vocabulary speech recognition systems based on Gaussian mixture hidden Markov models (HMMs). The paper concentrates on the maximum mutual information estimation (MMIE) criterion which has been used to train HMM systems for conversational telephone speech transcription using up ...
Selective MCE training strategy in Mandarin speech recognition
The use of discriminative training methods in speech recognition is a promising approach. Minimum classification error (MCE) based discriminative methods have been extensively studied and successfully applied to speech recognition [1][2][3], speaker recognition [4], and utterance verification [5][6]. Our goal is to modify the embedded string model based MCE algorithm to train a large number...
Improving Discriminative Training for Robust Acoustic Models in Large Vocabulary Continuous Speech Recognition
This paper studies the robustness of discriminatively trained acoustic models for large vocabulary continuous speech recognition. Popular discriminative criteria, maximum mutual information (MMI), minimum phone error (MPE), and minimum phone frame error (MPFE), are used in the experiments, which include realistic mismatched conditions from the Finnish Speecon corpus and the English Wall Street Journal c...
Publication date: 2001